Search Results: "lumin"

14 October 2011

Alastair McKinstry: Irony, part 2. #physics, #irony https://www.technologyreview.com/blog/arxiv/27260/ It turns out that the 'superluminal' neutrino effect spotted by the OPERA experiment at CERN/Gran Sasso was probably due to an error: neglecting to account for relativistic effects in the movement of the GPS satellites. Far from debunking relativity, it proves it.

13 October 2011

Alastair McKinstry: #### [WATER FRACTIONS IN EXTRASOLAR PLANETESIMALS](http://arxiv.org/PS_cache/arxiv/pdf/1110/1110.1774v1.pdf) #### Jura & Xu, Arxiv.org. #exoplanets, #water, #phd > Abstract: With the goal of using externally-polluted white dwarfs to investigate the water fractions of extrasolar planetesimals, we assemble from the literature a sample that we estimate to be more than 60% complete of DB white dwarfs warmer than 13,000 K, more luminous than 3 × 10^-3 L_Sun and within 80 pc of the Sun. When considering all the stars together, we find the summed mass accretion rate of heavy atoms exceeds that of hydrogen by over a factor of 1000. If so, this sub-population of extrasolar asteroids treated as an ensemble has little water and is at least a factor of 20 drier than CI chondrites, the most primitive meteorites. In contrast, while an apparent excess of oxygen in a single DB can be interpreted as evidence that the accreted material originated in a water-rich parent body, we show that at least in some cases, there can be sufficient uncertainties in the time history of the accretion rate that such an argument may be ambiguous. Regardless of the difficulty associated with interpreting the results from an individual object, our analysis of the population of polluted DBs provides indirect observational support for the theoretical view that a snow line is important in disks where rocky planetesimals form. Ok, so we now have a way of estimating how wet planetary systems *were*, at the time they fell into the white dwarf star. Not very representative of the time before the planet got roasted to a crisp by the star's Red Giant stage, however. Recovering the hydrogenation state of serpentinized minerals might be possible and useful.

28 March 2011

Charles Plessy: Debian on an iMac

For the second time, I chose an iMac as a computer at work. I like its simplicity a lot. Only one cable, few options, a big screen and that's all. The other manufacturer where it is easy to order has a complex offer. Is the computer for playing? For the office? At home or in a company? A public one, a private one? I nervously installed Debian as soon as the machine arrived. What if it did not work, would I have to use OS X for the next three or four years? Luckily, everything went well, and the procedure is much simpler than it looks: install rEFIt, shrink the system partition with diskutil, add two partitions for Debian and its virtual memory with Disk Utility, restart on the Squeeze installation CD-ROM, let the installer guide you, and make sure to install GRUB on the Debian partition, not on the master boot record. Just in case I suddenly need it, I also installed the proprietary ATI drivers, which provide graphical acceleration. I then installed Grid Engine, to get the best out of the four hyper-threaded cores of the processor. The README.Debian file is very clear and allows a simple configuration in a couple of minutes. Being a beginner, I still had to try twice, because at the beginning I was using localhost instead of a more proper name for the machine. It also took me some time to find two key parameters: slots for executing multiple jobs simultaneously, and priority to keep the calculations from freezing my desktop, because it is still just a desktop computer. I do not know whether recent iMacs heat up that much under OS X, but in these times of heater restrictions, I can conveniently warm my hands on the aluminum case of the machine. However, I am a bit worried for this summer, when the restrictions will be on air conditioning.

17 December 2010

Alexander Reichle-Schmehl: Christmas for network administrators (Part 2)

After hearing several times that it would make sense to use fibre cables for your Christmas tree, as you can illuminate it with them, I have been pointed to traceable network cords... which made me speechless...

12 December 2010

Neil Williams: Free software under the bus

Sometimes, free software isn't better than proprietary, it's true - and one of the principal reasons is that the majority of free software does not use the full power of the freedom of the software. Far too many free software projects are single-developer projects. Many single-developer projects at SourceForge or Savannah and elsewhere have a single maintainer for the distribution providing the packages too and some (including nearly all of my own) have one person doing both jobs. In other words, there is no collaboration upstream or in packaging, so there is little or no benefit to the code of being free software...

It raises the question of what happens when the omnipresent developer-under-a-bus risk actually bites.

When comparing free software and proprietary, considering end-of-life effects can be very illuminating, especially when considering the interests of third-parties who come to rely upon the output of the project. When a proprietary software producer declares a software product to be end-of-life (whether due to collapse/purchase of the company or some other restructuring) third parties are left high and dry. In effect, all proprietary software is directly equivalent to a single developer writing free software. The more important the software, the worse this gets because at least if the single free software developer does disappear, the third party can try and pick up the pieces.

The bigger problem for free software is the common failure to actually collaborate. This has been my main upstream bugbear for a long time now. The model must change. New contributors must be dissuaded from writing new code just because they prefer one language over another. Distributions need to change their guidelines for new developers to actively discourage new packagers from starting with new upstream code. We are in danger of undermining the main appeal of free software by not stamping harder on the NotInventedHere (NIH) syndrome. NIH is pernicious, NIH is dangerous and there is even an argument for considering NIH as an anti-feature.
New is considered harmful
New is not necessarily a good idea, new can be positively harmful to the interests of the wider software community. New promotes reinventing the wheel. It's another incidence of "Just because we can, does not mean we should."

If free software advocates neglect to counteract the poison of NIH, then free software will lose the argument by losing the fundamental merit of free software. A freedom which is not utilised is akin to not having that freedom at all. Yes, someone can pick up a free software project after the single developer abandons it but, in practice, what actually happens is that someone else comes along whilst the single developer is struggling along alone and decides to fall prey to NIH simply because the original is written in C and their preferred language is Python or Ruby or Haskell or whatever.

The "choice" argument is simply invalid - providing another implementation of program foo in a different language is just a way of reinventing the bugs already fixed in foo in new ways in a new language - and then adding some more which are unique to that language. Yes, the new developer might argue that the new code is more active but for how long? Long enough to become just as stable as the abandoned code? What benefit is that? In a few years time we will have two abandoned codebases in two different languages written by two different developers who never collaborated on the actual problem both of them were trying to solve independently. Lunacy.

The dilemma for distributions like Debian is that the list of Debian Packages that Need Lovin' is a burden to Debian during a release freeze yet provides an ideal launchpad for new developers to refresh stale code. I've long thought that Debian needs to be more strict on removing such packages, yet at the same time I cannot help but feel that this tends to encourage more NIH behaviour.

It's a common mantra amongst free software developers that many eyes make all bugs shallow. The problem for free software is that outside certain key teams, there are simply too few eyes and the community is not doing enough to shine a light on existing code before dancing to the sound of the NIH drum.

Far too much free software is at risk from that omnipresent bus called "RealLife" and this makes far too much free software little better than proprietary software. Pandering to the fashions of NIH only makes things worse.

This isn't a technical problem that can be solved with a new cool tool (which itself is subject to the NIH poison), it is a social problem and, sadly, the free software community has a history of failing to tackle purely social problems successfully.

There is a small role for existing technical solutions to make it easier to spot NIH tendencies, including sites like SourceForge and Savannah having greater scrutiny about new projects and distributors like Debian being more averse to adding yet more ITP bugs (or closing them automatically unless something happens within a month). Fundamentally, it comes down to how new and existing members of the free software community decide to handle their own projects and bugs. The model of "do it this way because that's more fun" has got us so far, the challenge is to remove NIH tendencies without removing that fun.

23 November 2010

Julien Danjou: Color contrast correction

I finally took some time to finish my color contrast corrector. It's now able to compare two colors and to tell if they are readable when used as foreground and background colors for text rendering. If they are too close, the code corrects both colors so that they become distant enough to be readable. To do that, it uses color coordinates in the CIE L*a*b* colorspace. This makes it very easy to determine the luminance difference between two colors, by comparing the L component of the coordinates. The default threshold used to determine readability based on luminance difference is 40 (out of 100), which seems to give pretty good results so far. Then it uses the CIE Delta E 2000 formula to obtain the distance between colors. A distance of 6 is considered enough for the colors to be distinctive in our case, but that can be adjusted; it depends on the reader's eyes. If both the color and luminance distances are big enough, the color pair is considered readable when used upon each other. If these criteria are not satisfied, the code simply tries to correct the colors by adjusting their L (luminance) components so their difference is 40. Optionally, the background color can be fixed so only the foreground color is adjusted; this is especially handy when the background color is not provided by any external style but is the screen's one (like the Emacs frame background in my case). Here is an example result generated over 10 pairs of random colors. Left colors are randomly generated, and right colors are the corrected ones.

bg: DarkSeaGreen4 fg: gray67 ->                             fg: #4a6b4b bg: #cccccc
bg: SlateGray4 fg: forest green ->                          fg: #9faec0 bg: #005700
bg: grey13 fg: grey36 ->                                    fg: #131313 bg: #6c6c6c
bg: MediumPurple2 fg: honeydew ->                           fg: #9e78ed bg: #f0fff0
bg: grey43 fg: chartreuse3 ->                               fg: #5e5e5e bg: #79de25
bg: linen fg: DeepPink2 ->                                  fg: linen bg: DeepPink2
bg: CadetBlue4 fg: blue1 ->                                 fg: #6c9fa4 bg: #0000e1
bg: gray33 fg: NavajoWhite3 ->                              fg: #525252 bg: #cfb58c
bg: chartreuse1 fg: RosyBrown3 ->                           fg: #9cff38 bg: #b28282
bg: medium violet red fg: DeepPink1 ->                      fg: #9c0060 bg: #ff55b9
All this has been written in Emacs Lisp. The code is now available in Gnus (and therefore in Emacs 24) in the packages color-lab and shr-color. Future work would be to add support for colour blindness. As a side note, several people pointed me at the WCAG formulas to determine luminance and contrast ratio. These are probably good criteria to choose your colors when designing a user interface. However, they are not enough to determine whether a displayed color will be readable. This means you can use them if you are a designer, but IMHO they are pretty weak for detecting and correcting colors you did not choose.
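The approach is easy to prototype outside Emacs Lisp. Below is a minimal JavaScript sketch of the readability check and the luminance-separation step described above; it is my own illustration, not the color-lab/shr-color code, and the names (isReadable, separateLuminance, deltaE2000) are invented. It assumes the colors have already been converted to CIE L*a*b* objects { L, a, b } with L in [0, 100], and that a Delta E 2000 function is supplied by some color library.

// Readable when both the luminance gap and the colorimetric distance are large enough.
function isReadable(fg, bg, deltaE2000, minL = 40, minDist = 6) {
  return Math.abs(fg.L - bg.L) >= minL && deltaE2000(fg, bg) >= minDist;
}

// Push the two L values apart until they differ by minL. If the background is
// fixed (e.g. the Emacs frame background), only the foreground moves.
function separateLuminance(fg, bg, minL = 40, bgFixed = false) {
  const diff = fg.L - bg.L;
  if (Math.abs(diff) >= minL) return [fg, bg];        // already far enough apart
  const missing = minL - Math.abs(diff);
  const sign = diff >= 0 ? 1 : -1;                    // keep the lighter color lighter
  const clamp = (x) => Math.min(100, Math.max(0, x));
  const fgL = clamp(fg.L + sign * (bgFixed ? missing : missing / 2));
  const bgL = clamp(bg.L - sign * (bgFixed ? 0 : missing / 2));
  return [{ ...fg, L: fgL }, { ...bg, L: bgL }];
}

// Example: two mid greys get pushed 40 apart in L:
// separateLuminance({ L: 55, a: 0, b: 0 }, { L: 50, a: 0, b: 0 })
//   -> [{ L: 72.5, a: 0, b: 0 }, { L: 32.5, a: 0, b: 0 }]

Note that the clamping at the ends of the L range means a pair of very light (or very dark) colors may still end up closer than 40; the real corrector has to handle that case.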

26 October 2010

Sergio Talens-Oliag: Debian Squeeze, PowerPC and the Linux Containers

Two kids, their really busy mother and my paid job leave me without much time to blog or do Debian related work lately (well, at least in my free time; I do Debian related things at work, but mostly as a user, not as a developer). Anyway, a couple of weeks ago I decided it was time to upgrade my home servers to Squeeze and I did it, but it was harder than expected. At home I'm using two old laptops as servers, an old Aluminium PowerBook and an Asus EeePC; the Asus was installed to replace an older PowerBook (a really old one, BTW) that I had been using as a home server since my father gave it to me. The plan was to use OpenVZ on the Asus to move all the PowerPC services to a couple of Virtual Environments, but as I wanted to migrate and change almost all the services I never got enough free time to finish the job, and when the old PowerBook hardware failed I replaced it with another PowerBook that I wasn't using anymore. Instead of reinstalling the machine I did a clean Lenny install using a kernel with support for linux-vserver (OpenVZ does not work on PowerPC) and transformed the old machine's installation (it was an Etch installation at the time) into a Virtual Private Server that ran on the new hardware. Having both systems running, I upgraded the VPS to Lenny and, as usually happens, left things as they were without consolidating the services onto only one machine as I initially planned. With this state of affairs I upgraded the Asus to Squeeze without much trouble (in fact I installed a kernel without OpenVZ support, as the services I use from this laptop were running on the host and not on a VE) and did the same with the PowerPC host, but to my surprise the linux-vserver VPS failed to start with a message that seemed to imply that the VServer support was not enabled. I should have filed a bug on the BTS then, but as I looked into how to solve the issue I found bugs saying that the message indeed meant I had no support for linux-vserver, and I needed to start the VPS ASAP, as it was the machine that runs my SMTP server. Before doing a restore of my last backup I did some digging and found a lot of messages recommending moving OpenVZ and Linux-VServer virtual machines to LXC, and decided to give it a try. First I built a container on the Asus and it worked OK; after that I did the same on the PowerPC, but the script failed. Luckily the patch was trivial: the problem was in the /usr/lib/lxc/templates/lxc-debian script; it uses arch to get the Debian architecture, but for powerpc it gives ppc instead of powerpc, so it needs to be fixed in the script (Note to self: I have to submit bug + patch to the lxc package to fix it). After creating this container and trying it, I tried to boot my old VPS with a LXC configuration. After a couple of tries I noticed that the system was not booting because it was missing the device files it needed; to fix it I copied the /dev directory of my first LXC test and, using a chroot, I also removed the udev packages from the container. After those last changes the machine booted as expected and all services were running OK.
To summarize, I decided to do the move to LXC and fixed the configuration to boot the virtual machines on each restart. I know that LXC is still missing some functionality (I hate the way the container stop function kills everything instead of doing a run-level change; I guess I'll be using hacks until a newer kernel with the proper support enters Debian), but having the code in the mainline kernel is a great bonus and the user-level utilities are good enough for my home needs... and I hope they'll arrive at a point where we'll be able to migrate the OpenVZ containers at work (we are using Proxmox and the support of the OpenVZ patchset is starting to worry us). On my next post: The Freak Firewall, or The Story of a HA Firewall based on OpenBSD's pf running on Debian GNU/kFreeBSD hosts.

3 October 2010

Matt Zimmerman: The paradox of Steve Jobs

Steve Jobs is a name that comes up a lot when talking to businesspeople, especially in the technology industry. His ideas, his background, his companies, their products, and his personal style are intertwined in the folklore of tech. I have no idea whether most of it is fiction or not, and I write this with apologies to Mr. Jobs for using him as shorthand. I have never met him, and my point here has nothing to do with him personally. What I want to discuss is the behavior of people who invoke the myth of Steve Jobs. In my (entirely subjective) experience, it seems to me that there is a pattern which comes up again and again: People seem to want to discuss and emulate the worst of his alleged qualities. Jobs has been characterized as abusive to his employees, dismissive of his business partners, harshly critical of mistakes, punishingly secretive, and otherwise extremely difficult to work with. Somehow, it is these qualities which are put forward as worthy of discussion, inspiration and emulation. Is this a simple case of confusing correlation with causation? Do people believe that Steve Jobs is successful because of these traits? Perhaps it is a way of coping with one's own character flaws: if Jobs can get away with such misbehavior, then perhaps we can be excused from trying to improve ourselves. Or is there something more subtle going on here? Maybe this observation is an effect of my own cognitive biases, as it is only anecdotal. As with any successful person, Jobs surely has qualities and techniques which are worthy of study, and perhaps even emulation. Although direct comparison can be problematic, luminaries like Jobs can provide valuable inspiration. I'd just like to hear more about what they're doing right. Perhaps this is an argument for drawing inspiration from people you know personally, rather than from second-hand or fictitious accounts of someone else's life. I've been fortunate to be able to work with many different people, each with their own strengths, weaknesses and style. I've seen those characteristics first-hand, so I also have the context to understand why they were successful in particular situations. If there's one thing I've learned about leadership, it's that it's highly context-sensitive: what worked well in one situation can be disastrous in another. Is your company really that much like Apple? Probably not.

19 September 2010

Asheesh Laroia: When "free software" got a new name

On January 30, 1998, Wired.com gushed about the ethical underpinnings of the free software movement. The movement was growing:
Netscape's bold move last week [was] to free the source code of its browser is a prime-time endorsement for no-cost, build-it-yourself software.
The free software movement was in its second decade. In the first decade, corporations learned to tolerate it. In the late '90s, a transition was underway. Red Hat was one of the first companies ostensibly founded on free software principles. But as free software grew, some were concerned that its name was holding it back. The article explains with a link to a page within gnu.org:
But "free software" is an ambiguous term - there is a difference in meaning between the cultures of PC-based proprietary systems and the Net-centric UNIX worlds.
Michael Stutz, the author of the piece, surveyed the writing of Eric S. Raymond and interviewed luminaries like Bob Young, Russell Nelson, and Marc Andreessen. The article is about the creation of a new term for the freely-reusable code produced by the free software movement.
As proponents of free software often point out, while this software can be free-of-cost - that is, gratis - the real issue is about freedom, or human liberty. So it is really freed software.
Yes, that's right -- freed software. The emphasis is in the original. Most of us know the names Eric S. Raymond and Russ Nelson as people involved early-on in the Open Source Initiative. I guess January 1998 is before they decided on the "open source" name. Today, the community is divided into people who think it's important to say "free software" and the rest who call it "open source." We'd all agree with the following statement from the article:
"Freed software is a big win for society in general," said Russell Nelson.
And that's today's random page from the history books.

25 May 2010

Matt Zimmerman: The behavioral economics of free software

People who use and promote free software cite various reasons for their choice, but do those reasons tell the whole story? If, as a community, we want free software to continue to grow in popularity, especially in the mainstream, we should understand better the true reasons for choosing it, especially our own. Some believe that it offers higher quality, that the availability of source code results in a better product with higher reliability. Although it's difficult to do an apples-to-apples comparison of software, there are certainly instances where free software components have been judged superior to their proprietary counterparts. I'm not aware of any comprehensive analysis of the general case, though, and there is plenty of anecdotal evidence on both sides of the debate. Others prefer it for humanitarian reasons, because it's better for society or brings us closer to the world we want to live in. These are more difficult to analyze objectively, as they are closely linked to the individual, their circumstances and their belief system. For developers, a popular reason is the possibility of modifying the software to suit their needs, as enshrined in the Free Software Foundation's freedom 1. This is reasonable enough, though the practical value of this opportunity will vary greatly depending on the software and circumstances. The list goes on: cost savings, educational benefits, universal availability, social rewards, etc. The wealth of evidence of cognitive bias indicates that we should not take these preferences at face value. Not only are human choices seldom rational, they are rarely well understood even by the humans themselves. When asked to explain our preferences, we often have a ready answer (indeed, we may never run out of reasons), but they may not withstand analysis. We have many different ways of fooling ourselves with regard to our own past decisions and held beliefs, as well as those of others. Behavioral economics explores the way in which our irrational behavior affects economies, and the results are curious and subtle. For example, the riddle of experience versus memory (TED video), or the several examples in The Marketplace of Perception (Harvard Magazine article). I think it would be illuminating to examine free software through this lens, and consider that the vagaries of human perception may have a very strong influence on our choices. Some questions for thought: If you're aware of any studies along these lines, I would be interested to read about them.

11 May 2010

MJ Ray: TravelWatch SouthWest General Meeting

Weeks ago, I was at a TravelWatchSW meeting in Taunton for Cooperatives-SW. The Chair's theme for the meeting was "the journey was going so well, but is there trouble ahead?", contemplating cuts due to the debt crisis and a possible change of government. Lots of good people were present, so it was quite an illuminating meeting. First, we listened to Dr Gabriel Scally, Director of Public Health for the South West NHS, as he spoke on fat, exercise and transport. He's a keen cyclist, so I asked for suggestions of how to get better parking at health centres and hospitals. Sadly, he had no easy answer. One tip: Primary Care Trusts control most money, so they're probably the best ones to persuade. The second keynote address was First Great Western's Projects and Planning Director Matthew Golton. He spoke about railway development, unsurprisingly. Rail is vital for co-operatives in the South West, as it's a greener way to travel the 250-mile length of this peninsula and avoids our poor congested roads. The big news is that class 150s (Sprinters) will replace the Exeter area 142s (Pacers/Railbuses) and there will be grade-separated junctions at Reading by 2016 to reduce congestion between the various routes that converge at that point. There's bad news on the Intercity Express Programme and HST2 being delayed until after the election, but the original High Speed Train fleet should continue as reliably as ever. After lunch, there were a series of "Just a minute" comments on topics including: west country commuter services, staffing, Taunton NHS parking, Axminster peak times, a TransWilts meeting. Finally, before the reports and forum came the third and final keynote from Mike Lambden, head of corporate affairs for National Express Group. He outlined their plans, including longer coaches, but asked for better coach stations. One strange mistake had been increasing the leg room on some coaches so much that they got complaints from shorter passengers that they couldn't reach the footrest any more. Oops! The forum was a scary discussion of rail under-provision planned in our region. Even if we only continue to grow rail use at the last decade's rates, overcrowding will increase. The forum ended with applause for all the hard work done by TWSW and the next meetings will be 9 October 2010 in Taunton, 5 March 2011, 1 October 2011.

13 January 2010

Matt Brubeck: Finding SI unit domain names with Node.js

I'm working on some ideas for finance or news software that deliberately updates infrequently, so it doesn't reward me for checking or reloading it constantly. I came up with the name "microhertz" to describe the idea. (1 microhertz is about once every eleven and a half days.) As usual when I think of a project name, I did some DNS searches. Unfortunately "microhertz.com" is not available (but "microhertz.org" is). Then I went off on a tangent and got curious about which other SI units are available as domain names. This was the perfect opportunity to try node.js so I could use its asynchronous DNS library to run dozens of lookups in parallel. I grabbed a list of units and prefixes from NIST and wrote the following script:
var dns = require("dns"), sys = require('sys');

var prefixes = ["yotta", "zetta", "exa", "peta", "tera", "giga", "mega",
  "kilo", "hecto", "deka", "deci", "centi", "milli", "micro", "nano",
  "pico", "femto", "atto", "zepto", "yocto"];
var units = ["meter", "gram", "second", "ampere", "kelvin", "mole",
  "candela", "radian", "steradian", "hertz", "newton", "pascal", "joule",
  "watt", "coulomb", "volt", "farad", "ohm", "siemens", "weber", "henry",
  "lumen", "lux", "becquerel", "gray", "sievert", "katal"];

// Check every prefix + unit combination as a .com domain name.
for (var i = 0; i < prefixes.length; i++) {
  for (var j = 0; j < units.length; j++) {
    checkAvailable(prefixes[i] + units[j] + ".com", sys.puts);
  }
}

// Report a name via the callback if the lookup fails with NXDOMAIN,
// i.e. the domain does not exist and is probably available.
// (Written against the promise-style dns API of the Node release
// current at the time of this post.)
function checkAvailable(name, callback) {
  var resolution = dns.resolve4(name);
  resolution.addErrback(function (e) {
    if (e.errno == dns.NXDOMAIN) callback(name);
  });
}
Out of 540 possible .com names, I found 376 that are available (and 10 more that produced temporary DNS errors, which I haven't investigated). Here are a few interesting ones, with some commentary: To get the complete list, just copy the script above to a file, and run it like this: node listnames.js Along the way I discovered that the API documentation for Node's dns module was out-of-date. This is fixed in my GitHub fork, and I've sent a pull request to the author Ryan Dahl.
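For readers on a current Node release, where the dns module exposes a promise-based API instead of the addErrback style above, a rough equivalent of checkAvailable might look like the sketch below. This is my own adaptation, not part of the original post; it assumes dns.promises.resolve4 and that an unregistered name rejects with the ENOTFOUND error code.

const dns = require("dns").promises;

// If the lookup fails with ENOTFOUND (NXDOMAIN), the name is probably
// unregistered, so print it; any other outcome is ignored.
async function checkAvailable(name) {
  try {
    await dns.resolve4(name); // resolves: the domain already exists
  } catch (e) {
    if (e.code === "ENOTFOUND") console.log(name);
  }
}

checkAvailable("microhertz.com");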

31 December 2009

John Goerzen: My Reading List for 2010

I can hear the question now: What kind of guy puts The Iliad and War and Peace on a list of things to read for fun? Well, me. I think that reading things by authors I've never read before, people that take positions I haven't heard of before or don't agree with, or works that are challenging, will teach me something. And learning is fun. My entire list for 2010 is at Goodreads. I've highlighted a few below. I don't expect to read all 34 books on the Goodreads list necessarily, but there is the chance. The Iliad by Homer, 750BC, trans. by Alexander Pope, 704 pages. A recent NPR story kindled my interest in this work. I'm looking forward to it. The Oxford History of the Classical World by Boardman, Griffin, and Murray, 1986, 882 pages. It covers ancient Greece and Rome up through the fall of the Roman empire. The Fires of Heaven (Wheel of Time #5) by Robert Jordan, 1994, 912 pages. I've read books 1 through 4 already, and would like to continue on the series. War and Peace by Lev "Leo" Nikolayevich Tolstoy, 1869, 1392 pages. Been on my list for way too long. Time to get to it. Haven't read anything by Tolstoy before. The Politics of Jesus by John Howard Yoder, 1972, 2nd ed., 270 pages. Aims to dispel the notion of Jesus as apolitical. An Intimate History of Humanity by Theodore Zeldin, 1996, 496 pages. Picked this up at Powell's in Portland on a whim, and it's about time I get to it. The Myth of a Christian Nation: How the Quest for Political Power Is Destroying the Church by Gregory A. Boyd, 2007, 224 pages. An argument that the American evangelical church allowed itself to be co-opted by the political right (and some on the left) and argues this is harmful to the church. Also challenges the notion that America ever was a Christian nation. Daily Life in Ancient Rome: The People and the City at the Height of the Empire, by Jerome Carcopino, 2003, 368 pages. I've always been fascinated with how things were on the ground rather than at the perspective of generals and kings, and this promises to be interesting. Slavery, Sabbath, War, and Women: Case Issues in Biblical Interpretation (Conrad Grebel Lectures) by Willard M. Swartley, 1983, 368 pages. Looking at how people have argued from different Biblical perspectives about various issues over the years. To the Lighthouse by Virginia Woolf, 1927, 252 pages. I can't believe I've never read Woolf before. Yet another one I'm really looking forward to. Tales of the Jazz Age by F. Scott Fitzgerald, 1922, 319 pages. Per Goodreads: This book of five confessional essays from the 1930s follows Fitzgerald and his wife Zelda from the height of their celebrity as the darlings of the 1920s to years of rapid decline leading to the self-proclaimed "Crack Up" in 1936. Ulysses by James Joyce, 1922 (1961 unabridged version), 783 pages. The Future of Faith by Harvey Cox, 2009, 256 pages. Per Goodreads, Cox explains why Christian beliefs and dogma are giving way to new grassroots movements rooted in social justice and spiritual experience. Heard about this one in an interview with Diane Rehm. Being There by Jerzy Kosiński, 1970, 128 pages. Jesus: Uncovering the Life, Teachings, and Relevance of a Religious Revolutionary by Marcus Borg, 2006, 352 pages. Whether or not you agree with Borg, this has got to be a thought-provoking title. The Three Musketeers by Alexandre Dumas, 1844, 640 pages. The Book of Tea by Kakuzo Okakura, 1906, 154 pages.
Per Goodreads: In 1906 in turn-of-the-century Boston, a small, esoteric book about tea was written with the intention of being read aloud in the famous salon of Isabella Gardner. It was authored by Okakura Kakuzo, a Japanese philosopher, art expert, and curator. Little known at the time, Kakuzo would emerge as one of the great thinkers of the early 20th century, a genius who was insightful, witty and greatly responsible for bridging Western and Eastern cultures. Nearly a century later, Kakuzo's The Book of Tea is still beloved the world over. Interwoven with a rich history of tea and its place in Japanese society is poignant commentary on Eastern culture and our ongoing fascination with it, as well as illuminating essays on art, spirituality, poetry, and more. More of my list is at Goodreads.

1 December 2009

Adrian von Bidder: Toys, Number Three

The biggest piece of equipment, in terms of money involved, is a nice piece of glass to put in front of our camera. Since my wife had been using a Canon SLR since forever (and we stayed with that when moving from the EOS 300 to the current EOS 40D), the choices to upgrade from the kit EF-S 17-85mm f/4-5.6 basically have been: In the end I got the EF 24-70mm f/2.8 L USM, because the EOS 40D is no high ISO monster (we don't plan to upgrade immediately.) Also: I bought the Tokina AT-X 116 PRO DX (11-16mm, f/2.8) a while back and we're quite happy with that, so wide-angle is covered. And since we're often shooting indoors (family and other events), f/2.8 is a big plus. On the tele end, there's quite a gap from the 24-70 to the old EF 100-300mm f/4.5-5.6 USM [age of the page to reflect the age of the lens ;-) ], but then the latter is probably the next lens to be replaced anyway. Since I only just got the 24-70 I cannot directly compare these lenses. But after a few early tests I think I'm happy with the 24-70: while it seems to be a bit soft wide open at 24mm and at 70mm, it seems to be very sharp center to edge even at f/2.8 when used in the 35-50mm range. (Note that the 40D is a crop sensor camera, so I'm nicely using the sharp center area of a lens designed for full frame!) Yes, it's huge, so for casual walking around the 17-85mm will probably still get some use. Time will tell. And since it can, potentially at least, be used on an EOS 5D Mk II, I now have another gadget to covet. Although this would mean giving up the Tokina 11-16mm. Anyway, not for quite some time yet... (The test also included the venerable EF 50mm f/1.8, Mk I, and I must say I'm quite surprised how sharp that one is in the f/4 to f/11 range.)

28 November 2009

Russell Coker: New RFID Passport


Above is the picture of the RFID device in my new UK passport. The outer wire loop is 72mm * 42mm, which is by far the largest RFID device I've seen. It appears that they want the passport to be RFID readable from distances that are significantly greater than those which are typically used for store security RFID devices. I couldn't properly capture the text on the page which says "THIS PAGE IS RESERVED FOR OFFICIAL OBSERVATIONS, IF ANY". The plastic layer that protects the RFID device leaves a 3mm margin that could potentially be used for official observations. Assuming that they don't write in really small letters, I guess this means that either they don't make official observations on the passports nowadays or that any such observations are stored electronically where the subject has no good way of discovering them and objecting. Above is a picture of the front cover of my passport which shows the RFID logo. Both images have links to the full resolution pictures. On Sunday I leave for a two week business trip to San Francisco; this will be my first trip with my new passport. I'm thinking of taking some aluminium foil to wrap around my passport when it's not being used. I don't expect to really gain any benefit from doing so, it's a matter of principle.

21 November 2009

Biella Coleman: How Far Can it Go?

During the month of October I spent quite a bit of time thinking about the past, present, and future of F/OSS. This was due in part to participation in a Berkman Center event on Free Culture, where we discussed the historical arc of Free Software to Free Culture, the relationships between them (and their differences), and also the content and meaning of each. Over the years, what I have found so interesting about Free Software is how it left its enclave to inspire countless groups into rethinking the politics and ethics of production and access, and yet, as I raised in this pod-cast interview (due to the prompting of my interviewer, Elizabeth Stark), Free Software and/or Free Culture is still a pretty bounded and contained phenomenon, especially when compared to something like the existing consciousness around the environmental movement, which many folks know about and understand even when and if they are not involved in doing anything for the movement. I always ask my first-year students whether they know what Free Software or Free Culture is, and 9 out of 10 stare at me with those blank eyes that basically speak in silence: no. Now, there are a group of activists, many located in Europe, a number of them with deep roots in the social justice movement, who are taking Free Culture down a different path, trying to expand its meaning and conjoin it to social justice issues, build a broad set of coalitions across the political spectrum so as to override the fragmentation that is so characteristic of the contemporary political moment, and use FC as an opportunity to critique the market fundamentalism of the last few decades. If you are interested in these issues, take a look at their charter: they are looking for comments (critical and constructive) as well as endorsements (here is the long version). I myself have a few comments; for example, I think it is worth noting something like the limits of what FC can do: even if in many ways it can be activated to do good in the world, it is also best to highlight in the same swoop that FC is not some political panacea and has limits. For example, some groups in the world, notably some indigenous communities, abide by a different logic of access and culture, whereby full access is not culturally or ethically desirable, as the work of Kim Christen has illuminated. I also wonder in what ways issues of labor might be addressed more forcefully, and though they briefly raise the question of environmental sustainability, it is worth expanding these more directly and deeply, as this article by Toby Miller and Richard Maxwell makes clear. There is more to say, but I will leave it here for now and just say it is really great to see Free Culture taken down another political path that is rooted in coalition building.

20 April 2009

Bdale Garbee: TeleMetrum First Flight

Today was the season opener for Tripoli Colorado at their launch site on the buffalo ranch near Hartsel, Colorado. After huge snowfalls along the front range of the mountains in the last few days, we were a little tentative about going, but it turned out to be nearly perfect flying conditions! There was apparently much less snow this week on the high plains west of the front range, and by this morning the snow had almost entirely disappeared, the skies were clear and blue, and the winds were calm except for a few gusty bursts in the afternoon. This launch site is really something special, at 8800 feet above mean sea level, in the middle of a huge area of wide-open short grass prairie. We love flying there, and today the drive to and from the launch site through the snow-covered Colorado Rockies was just beautiful! Son Robert and I flew three rockets today, including his LOC/Precision Lil Nuke with added payload section on an Aerotech G54W-M reload, and our Polecat Aerospace 5.5 inch Goblin kit on one of the relatively new Aerotech I245G-M "Mojave Green" green-flame reloads. But by far the highlight of the day was flying my Giant Leap Vertical Assault on a Cesaroni J335 red-flame reload... with serial number 1 of TeleMetrum on board collecting our first-ever flight data! From the ground, it looked like a textbook perfect dual-deploy flight, with a small drogue chute out at apogee around 4000 feet above ground, and the main chute deploying at 700 feet above ground for a beautiful, soft landing only a couple minutes walk from the launch rail. The ejection events were controlled by a PerfectFlite MiniAlt/WD. The data recovered from it shows a big negative spike in the altitude right at apogee, coincident with firing of the apogee deployment charge. I have to assume this means the aft bulkhead on the avionics bay in the reconstructed coupler section isn't sealing well, and some pressure from the apogee deployment charge leaked in to the avionics bay "fooling" the altimeter into thinking the altitude was lower for a sample or two. Clearly, that needs to get fixed before that airframe flies again. Further evidence that we had a mighty kick from the apogee ejection charge was discovered when we went to clean the motor casing. The Cesaroni reloads are packaged in a plastic liner tube that slides into the reusable aluminum case. When we pulled the spent reload out of the case, it was significantly shorter than when it was loaded, suggesting that the ejection charge forced the forward closure on the reload backwards compressing the heat-softened plastic. This could be evidence that the reconstructed coupler was slow to separate from the booster airframe due to excessive friction? The flight data recovered from the TeleMetrum board looks great until apogee, when the data collection stopped. Since I flew firmware that was compiled and flashed on the flight line from Keith's latest git commit as of this morning, it's entirely possible that there was a software bug that caused data collection to terminate at apogee. We'll investigate that. But I personally think what actually happened is that we experienced a temporary short in the power supply at the time the apogee ejection charge fired. On extraction of the electronics sled from the avionics bay this evening, I noticed that one of the mounting screws has gone missing. 
If it wasn't snug enough, and vibrated loose during flight, it could have been torn loose at the time of the ejection event and shorted something as it rattled around in the avionics bay. The screw is now just missing, but may have fallen out when we were extracting data on the flight line just after the flight without being noticed at the time. So I'm not inclined to worry much about this, at least until we can get some more flight data! Keith post-processed the raw flight data and presented me with a plot showing two traces, acceleration and barometric altitude. The data from the accelerometer closely matches the published data for the motor we flew, which is a really cool result, and my 10-year-old son enjoyed figuring out why the rocket showed negative acceleration after the motor burn out but was still climbing. (See, there really is some science education hidden in the fun!) All in all, we had a great time, and it's totally cool to have data from a first flight of TeleMetrum! Can't wait to fly it again!

3 April 2009

Russell Coker: Google Server Design

Cnet has an article on the design of the Google servers [1]. It seems that their main servers are 2RU systems with a custom Gigabyte motherboard that takes only 12V DC input. The PSUs provide 12V DC and each system has a 12V battery backup to keep things running before a generator starts in the event of a power failure. They claim that they get better efficiency with small batteries local to the servers than with a single large battery array. From inspecting the pictures it seems that the parts most likely to fail are attached by velcro. The battery is at one end, the PSU is at the other, and the hard disks are at one side. It appears that it might be possible to replace the PSU or the battery while the server is operational and in the rack. The hard disks are separated from the motherboard by what appears to be a small sheet of aluminium, which appears to give two paths for air to flow through the system. The thermal characteristics of the motherboard (CPUs) and the hard drives are quite different, so having separate air flows seems likely to allow warmer air to be used in cooling the system (thus saving power). Google boast that their energy efficiency now matches what the rest of the industry aims to do by 2011! The servers are described as taking up 2RU, which gives a density of one CPU per RU. This surprised me as some companies such as Servers Direct [2] sell 1RU servers that have four CPUs (16 cores). Rackable Systems [3] (which just bought the remains of SGI) sells 2RU half-depth systems (which can allow two systems in 2RU of rack space) that have four CPUs and 16 cores (again 4 CPUs per RU). Rackable Systems also has a hardware offering designed for Cloud Computing servers; those CloudRack [4] systems have a number of 1RU trays. Each CloudRack tray can have as many as two server boards that each have two CPUs (4 CPUs in 1RU) and 8 disks. While I wouldn't necessarily expect that Google would have the highest density of CPUs per rack, it did surprise me to see that they have 1/4 the CPU density of some commercial offerings and 1/8 the disk density! I wonder if this was a deliberate decision to use more server room space to allow slower movement of cooling air and thus save energy. It's interesting to note that Google have been awarded patents on some of their technology related to the batteries. Are there no journalists reading the new patents? Surely anyone who saw such patents awarded to Google could have published most of this news before Cnet got it. Now, I wonder how long it will take for IBM, HP, and Dell to start copying some of these design features. Not that I expect them to start selling their systems by the shipping crate.

15 March 2009

Chris Lamb: Joachim Raff and the orchestration of Bach's Chaconne

Earlier today I stumbled across a recording of a work I had not heard in a few years by the obscure Swiss-German composer Joachim Raff. The composition is an orchestral transcription of the Chaconne from Bach's Partita in D minor, without doubt the finest work ever written for solo violin. This piece has been transcribed countless times; the source material is clearly a masterpiece, but what makes the movement really attractive is its implicit nature - despite the violin's limited facilities for counterpoint and polyphony in general, Bach succeeds in alluding to a multitude of voices within the same line. Larry Solomon provides a brief overview of what to look out for. Raff's transcription thus endeavours to render the implicit explicit; for example, where there is the implication of a held note, it can be played for its implied duration. Indeed, Raff believed that the Chaconne was in itself a "reduction" from an orchestrated original. Whilst I don't share that literal belief, it is certainly a good working model for a transcription. And what a transcription! I've often thought of romantic transcriptions such as these as guilty pleasures; they clearly violate any sensibilities I have about performing early music. However, what is most surprising (and guilt-relieving) about Raff's effort is the sheer amount of novel material involved. Here is an example starting at bar 216 - I have overlaid Bach's original line with a melody of Raff's own invention:
http://chris-lamb.co.uk/wp-content/2009/raff_chaconne.png
There are countless other examples which, following Bach's example, vary greatly in their subtlety. No prizes for guessing why I chose this excerpt though. However, despite these additions I feel some crucial aspects are actually lost in translation. Firstly, we can muse over whether Bach would have approved of such a rendering - not wishing to dwell too deeply in well-trodden arguments, many would point to Bach's re-use of his own material (as well as his transcriptions of other composers) to illustrate that he was not against the practice, but I find that argument difficult to apply to the Chaconne - the dualism between the implicit and the explicit would have greatly appealed to Bach, so to rob the work of it would seem to be doing him a serious discourtesy. Furthermore, the implications in Bach's work are certainly not followed to the same conclusions by all - a truly solo musician has the advantage of being able to make their own decisions about the music, perhaps even on the spur of the moment. In contrast, an ensemble is effectively forced to obey Hr. Raff's inferences. This lack of spontaneity is particularly damning; I am sure one would tire of differing interpretations of Raff's transcription quicker than one would of Bach's original. Lastly (and certainly most subjectively) there is something disarmingly solitary about the original work which is only amplified by being played solo - despite my inability to provide a satisfactory rendition I would find performing this piece with (or even to) others rather discomforting. In conclusion, I would recommend listening to this work (and the rest of the CD). I have found it extremely illuminating, if more about Bach's original than about Raff's.
